Results 1 - 20 of 1,881
1.
Front Psychol ; 15: 1300996, 2024.
Article in English | MEDLINE | ID: mdl-38572198

ABSTRACT

Introduction: Emotional recognition from audio recordings is a rapidly advancing field, with significant implications for artificial intelligence and human-computer interaction. This study introduces a novel method for detecting emotions from short, 1.5 s audio samples, aiming to improve accuracy and efficiency in emotion recognition technologies. Methods: We utilized 1,510 unique audio samples from two databases in German and English to train our models. We extracted various features for emotion prediction, employing Deep Neural Networks (DNN) for general feature analysis, Convolutional Neural Networks (CNN) for spectrogram analysis, and a hybrid model combining both approaches (C-DNN). The study addressed challenges associated with dataset heterogeneity, language differences, and the complexities of audio sample trimming. Results: Our models demonstrated accuracy significantly surpassing random guessing, aligning closely with human evaluative benchmarks. This indicates the effectiveness of our approach in recognizing emotional states from brief audio clips. Discussion: Despite the challenges of integrating diverse datasets and managing short audio samples, our findings suggest considerable potential for this methodology in real-time emotion detection from continuous speech. This could contribute to improving the emotional intelligence of AI and its applications in various areas.
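The pipeline above starts from fixed-length 1.5 s clips whose spectrograms feed the CNN branch. The paper's code is not shown; the following is a minimal NumPy sketch of that preprocessing stage, where the 16 kHz sample rate, window size, and function names are illustrative assumptions rather than the authors' settings:

```python
import numpy as np

def trim_clip(audio, sr=16000, duration=1.5):
    """Trim (or zero-pad) a waveform to a fixed-length clip."""
    n = int(sr * duration)
    if len(audio) >= n:
        return audio[:n]
    return np.pad(audio, (0, n - len(audio)))

def spectrogram(audio, n_fft=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT (CNN input)."""
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)

sr = 16000
clip = trim_clip(np.random.randn(2 * sr), sr=sr)  # 2 s in, 1.5 s out
spec = spectrogram(clip)                          # shape (257, 92)
```

In a C-DNN-style hybrid, `spec` would go to the CNN branch while scalar features (pitch, energy, MFCC statistics) would go to the DNN branch before a joint classifier.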

2.
Front Psychol ; 15: 1315682, 2024.
Article in English | MEDLINE | ID: mdl-38596340

ABSTRACT

Previous evidence suggested that chronic pain is characterized by cognitive deficits, particularly in the social cognition domain. Recently, a new chronic pain classification has been proposed distinguishing chronic primary pain (CPP), in which pain is the primary cause of patients' disease, and chronic secondary pain (CSP), in which pain is secondary to an underlying illness. The present study aimed at investigating social cognition profiles in the two disorders. We included 38 CPP, 43 CSP patients, and 41 healthy controls (HC). Social cognition was assessed with the Ekman-60 faces test (Ekman-60F) and the Story-Based Empathy Task (SET), whereas global cognitive functioning was measured with the Montreal Cognitive Assessment (MoCA). Pain and mood symptoms, coping strategies, and alexithymia were also evaluated. Correlations among clinical pain-related measures, cognitive performance, and psychopathological features were investigated. Results suggested that CSP patients were impaired compared to CPP and HC in social cognition abilities, while CPP and HC performance was not statistically different. Pain intensity and illness duration did not correlate with cognitive performance or psychopathological measures. These findings confirmed the presence of social cognition deficits in chronic pain patients, suggesting for the first time that such impairment mainly affects CSP patients, but not CPP. We also highlighted the importance of measuring global cognitive functioning when targeting chronic pain disorders. Future research should further investigate the cognitive and psychopathological profile of CPP and CSP patients to clarify whether present findings can be generalized as disorder characteristics.

3.
J Neural Eng ; 21(2)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38565099

ABSTRACT

Objective. The study of emotion recognition through electroencephalography (EEG) has garnered significant attention recently. Integrating EEG with other peripheral physiological signals may greatly enhance performance in emotion recognition. Nonetheless, existing approaches still suffer from two predominant challenges: modality heterogeneity, stemming from the diverse mechanisms across modalities, and fusion credibility, which arises when one or multiple modalities fail to provide highly credible signals. Approach. In this paper, we introduce a novel multimodal physiological signal fusion model that incorporates both intra-inter modality reconstruction and sequential pattern consistency, thereby ensuring computable and credible EEG-based multimodal emotion recognition. For the modality heterogeneity issue, we first implement a local self-attention transformer to obtain intra-modal features for each respective modality. Subsequently, we devise a pairwise cross-attention transformer to reveal the inter-modal correlations among different modalities, thereby rendering different modalities compatible and diminishing the heterogeneity concern. For the fusion credibility issue, we introduce the concept of sequential pattern consistency to measure whether different modalities evolve in a consistent way. Specifically, we propose to measure the varying trends of different modalities and compute inter-modality consistency scores to ascertain fusion credibility. Main results. We conduct extensive experiments on two benchmark datasets (DEAP and MAHNOB-HCI) under the subject-dependent paradigm. For the DEAP dataset, our method improves the accuracy by 4.58% and the F1 score by 0.63% compared to the state-of-the-art baseline. Similarly, for the MAHNOB-HCI dataset, our method improves the accuracy by 3.97% and the F1 score by 4.21%.
In addition, we gain much insight into the proposed framework through significance tests, ablation experiments, confusion matrices, and hyperparameter analysis. Consequently, we demonstrate the effectiveness of the proposed credibility modelling through statistical analysis and carefully designed experiments. Significance. All experimental results demonstrate the effectiveness of our proposed architecture and indicate that credibility modelling is essential for multimodal emotion recognition.
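The abstract does not give the paper's actual consistency formula; one simple reading of "measuring the varying trends of different modalities" is to compare the sign of successive changes in each modality's feature sequence. The sketch below is an assumed illustration of such an inter-modality consistency score, not the authors' method:

```python
import numpy as np

def trend(x):
    """Direction of change of a feature sequence over time: +1, -1, or 0."""
    return np.sign(np.diff(x))

def consistency_score(a, b):
    """Fraction of time steps where two modalities evolve the same way."""
    ta, tb = trend(a), trend(b)
    return float(np.mean(ta == tb))

# Hypothetical per-window arousal-related features from two modalities.
eeg = np.array([0.1, 0.3, 0.2, 0.5, 0.7])
gsr = np.array([1.0, 1.4, 1.1, 1.6, 1.9])
score = consistency_score(eeg, gsr)  # both rise and fall together -> 1.0
```

A low score would flag a window where one modality's signal is not credible, so its contribution to the fusion could be down-weighted.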


Subjects
Benchmarking , Emotions , Electric Power Supplies , Electroencephalography , Recognition, Psychology
4.
Cogn Emot ; : 1-15, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38576358

ABSTRACT

Wearing facial masks became a common practice worldwide during the COVID-19 pandemic. This study investigated (1) whether facial masks that cover adult faces affect 4- to 6-year-old children's recognition of emotions in those faces and (2) whether the duration of children's exposure to masks is associated with emotion recognition. We tested children from Switzerland (N = 38) and Brazil (N = 41). Brazil represented longer mask exposure due to a stricter mandate during COVID-19. Children had to choose a face displaying a specific emotion (happy, angry, or sad) when the face wore either no cover, a facial mask, or sunglasses. Longer hours of mask exposure were associated with better emotion recognition. Controlling for the hours of exposure, children were less likely to recognise emotions in partially hidden faces. Moreover, Brazilian children were more accurate in recognising happy faces than Swiss children. Overall, facial masks may negatively impact children's emotion recognition. However, prolonged exposure appears to buffer the lack of facial cues from the nose and mouth. In conclusion, restricting facial cues due to masks may impair kindergarten children's emotion recognition in the short run. However, it may facilitate their broader reading of facial emotional cues in the long run.

5.
J Psychiatr Res ; 173: 333-339, 2024 May.
Article in English | MEDLINE | ID: mdl-38579478

ABSTRACT

BACKGROUND: Inflammation impairs cognitive function in healthy individuals and people with psychiatric disorders, such as bipolar disorder (BD). This effect may also impact emotion recognition, a fundamental element of social cognition. Our study aimed to investigate the relationships between pro-inflammatory cytokines and emotion recognition in euthymic BD patients and healthy controls (HCs). METHODS: We recruited forty-four euthymic BD patients and forty healthy controls (HCs) and measured their inflammatory markers, including high-sensitivity C-reactive protein (hs-CRP), interleukin-6 (IL-6), and TNF-α. We applied validated cognitive tasks, the Wisconsin Card-Sorting Test (WCST) and Continuous Performance Test (CPT), and a social cognitive task for emotion recognition, Diagnostic Analyses of Nonverbal Accuracy, Taiwanese Version (DANVA-2-TW). We analyzed the relationships between cytokines and cognition and then explored possible predictive factors of sadness recognition accuracy. RESULTS: Regarding pro-inflammatory cytokines, TNF-α was elevated in euthymic BD patients relative to HCs. In euthymic BD patients only, higher TNF-α levels were associated with lower accuracy of sadness recognition. Regression analysis revealed that TNF-α was an independent predictive factor of sadness recognition in patients with euthymic BD when neurocognition was controlled for. CONCLUSIONS: We demonstrated that enhanced inflammation, indicated by increased TNF-α, was an independent predictive factor of impaired sadness recognition in BD patients but not in HCs. Our findings suggested a direct influence of TNF-α on sadness recognition and indicated vulnerability to depression in euthymic BD patients with chronic inflammation.


Subjects
Bipolar Disorder , Humans , Bipolar Disorder/metabolism , Sadness , Tumor Necrosis Factor-alpha , Cytokines , Inflammation
6.
J Neurosci Methods ; 406: 110129, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38614286

ABSTRACT

The integration of emotional intelligence in machines is an important step in advancing human-computer interaction. This demands the development of reliable end-to-end emotion recognition systems. However, the scarcity of public affective datasets presents a challenge. In this literature review, we emphasize the use of generative models to address this issue in neurophysiological signals, particularly Electroencephalogram (EEG) and Functional Near-Infrared Spectroscopy (fNIRS). We provide a comprehensive analysis of different generative models used in the field, examining their input formulation, deployment strategies, and methodologies for evaluating the quality of synthesized data. This review serves as a comprehensive overview, offering insights into the advantages, challenges, and promising future directions in the application of generative models in emotion recognition systems. Through this review, we aim to facilitate the progression of neurophysiological data augmentation, thereby supporting the development of more efficient and reliable emotion recognition systems.

7.
Front Behav Neurosci ; 18: 1302916, 2024.
Article in English | MEDLINE | ID: mdl-38566859

ABSTRACT

Introduction: Schizophrenia (SCZ) is a complex neurodevelopmental disorder characterised by functional and structural brain dysconnectivity and disturbances in perception, cognition, emotion, and social functioning. In the present study, we investigated whether the microstructural organisation of the uncinate fasciculus (UF) was associated with emotion recognition (ER) performance. Additionally, we investigated the usefulness of an unbiased hit rate (UHR) score to control for response biases (i.e., participant guessing) during an emotion recognition task (ERT). Methods: Fifty-eight individuals diagnosed with SCZ were included. The CANTAB ERT was used to measure social cognition. Specific ROI manual tract segmentation was completed using ExploreDTI and followed the protocol previously outlined by Coad et al. (2020). Results: We found that the microstructural organisation of the UF was significantly correlated with physical neglect and ER outcomes. Furthermore, we found that the UHR score was more sensitive to ERT subscale emotion items than the standard HR score. Finally, given the association between childhood trauma (in particular childhood neglect) and social cognition in SCZ, a mediation analysis found evidence that microstructural alterations of the UF mediated an association between childhood trauma and social cognitive performance. Discussion: The mediating role of microstructural alterations in the UF on the association between childhood trauma and social cognitive performance suggests that early life adversity impacts both brain development and social cognitive outcomes for people with SCZ. Limitations of the present study include the restricted ability of the tensor model to correctly assess multi-directionality at regions where fibre populations intersect.

8.
Sensors (Basel) ; 24(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610412

ABSTRACT

Classical machine learning techniques have dominated Music Emotion Recognition (MER). However, improvements have slowed down due to the complex and time-consuming task of handcrafting new emotionally relevant audio features. Deep learning methods have recently gained popularity in the field because of their ability to automatically learn relevant features from spectral representations of songs, eliminating the need for such handcrafting. Nonetheless, there are limitations, such as the need for large amounts of quality labeled data, a common problem in MER research. To understand the effectiveness of these techniques, a comparison study using various classical machine learning and deep learning methods was conducted. The results showed that an ensemble of a Dense Neural Network and a Convolutional Neural Network architecture achieved a state-of-the-art 80.20% F1 score, an improvement of around 5% over the best baseline results, suggesting that future research should take advantage of both paradigms, that is, combining handcrafted features with feature learning.


Subjects
Deep Learning , Music , Data Accuracy , Emotions , Machine Learning
9.
Front Hum Neurosci ; 18: 1324897, 2024.
Article in English | MEDLINE | ID: mdl-38617132

ABSTRACT

Music is one of the primary ways to evoke human emotions. However, the experience of music is subjective, making it difficult to determine which emotions a piece of music triggers in a given individual. To correctly identify the emotions elicited by different types of music, we first created an electroencephalogram (EEG) dataset stimulated by four different types of music (fear, happiness, calm, and sadness). Second, differential entropy features were extracted from the EEG, and the emotion recognition model CNN-SA-BiLSTM was established to extract the temporal features of the EEG, with the global perception ability of the self-attention mechanism used to improve recognition performance. The effectiveness of the model was further verified by an ablation experiment. The classification accuracy of this method in the valence and arousal dimensions is 93.45% and 96.36%, respectively. By applying our method to the publicly available EEG dataset DEAP, we evaluated its generalization and reliability. In addition, we further investigated the effects of different EEG bands and multi-band combinations on music emotion recognition, and the results are consistent with relevant neuroscience studies. Compared with other representative music emotion recognition works, this method achieves better classification performance and provides a promising framework for future research on emotion recognition systems based on brain-computer interfaces.
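Differential entropy is the standard EEG feature referred to above: for an approximately Gaussian band-filtered segment it reduces to a closed form in the signal variance, 0.5 * ln(2 * pi * e * var). A short NumPy sketch (segment length and the Gaussian assumption are illustrative):

```python
import numpy as np

def differential_entropy(band_signal):
    """Differential entropy of a band-filtered EEG segment, assuming the
    samples are approximately Gaussian: 0.5 * ln(2 * pi * e * var)."""
    var = np.var(band_signal)
    return 0.5 * np.log(2 * np.pi * np.e * var)

rng = np.random.default_rng(0)
segment = rng.normal(0.0, 1.0, size=2000)  # unit-variance "band" signal
de = differential_entropy(segment)         # near 0.5*ln(2*pi*e) ~ 1.419
```

In practice one value is computed per channel and per frequency band (delta, theta, alpha, beta, gamma), and the resulting feature maps feed the recognition model.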

10.
J Neurol Sci ; 460: 123019, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38640582

ABSTRACT

OBJECTIVE: The aim of our study was to measure the ability of ALS patients to process dynamic facial expressions as compared to a control group of healthy subjects and to correlate this ability in ALS patients with neuropsychological, clinical and neurological measures of the disease. METHODS: Sixty-three ALS patients and 47 healthy controls were recruited. All the ALS patients also underwent i) the Geneva Emotion Recognition Test (GERT), in which ten actors express 14 types of dynamic emotions in brief video clips with audio; ii) the Edinburgh Cognitive and Behavioural ALS Screen (ECAS) test; iii) the ALS Functional Rating Scale Revised (ALSFRS-R); and iv) the Medical Research Council (MRC) scale for the evaluation of muscle strength. All the healthy subjects enrolled in the study underwent the GERT. RESULTS: The recognition of irritation and pleasure was significantly different between ALS patients and the control group. Amusement, despair, irritation, joy, sadness, and surprise were falsely recognized at different rates in the two groups. Specific ALS cognitive impairment was associated with the bulbar-onset phenotype (OR = 14.39; 95% CI = 3.96-52.16). No association was observed between false emotion recognition and cognitive impairment (F(1,60) = 0.570, p = 0.453). The number of categorical errors was significantly higher in the ALS patients than in the control group (27.66 ± 7.28 vs 17.72 ± 5.29; t = 8.723; p = 0.001). CONCLUSIONS: ALS patients show deficits in the dynamic processing of a wide range of emotions. These deficits are not necessarily associated with a decline in higher cognitive functions: this could therefore lead to an underestimation of the phenomenon.

11.
Neurosci Biobehav Rev ; 161: 105674, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38614451

ABSTRACT

This review delves into the phenomenon of positive emotional contagion (PEC) in rodents, an area that remains relatively understudied compared to the well-explored realm of negative emotions such as fear or pain. Rodents exhibit clear preferences for individuals expressing positive emotions over neutral counterparts, underscoring the importance of detecting and responding to positive emotional signals from others. We thoroughly examine the adaptive function of PEC, highlighting its pivotal role in social learning and environmental adaptation. The developmental aspect of the ability to interpret positive emotions is explored, intricately linked to maternal care and social interactions, with oxytocin playing a central role in these processes. We discuss the potential involvement of the reward system and draw attention to persisting gaps in our understanding of the neural mechanisms governing PEC. Presenting a comprehensive overview of the existing literature, we focus on food-related protocols such as the Social Transmission of Food Preferences paradigm and tickling behaviour. Our review emphasizes the pressing need for further research to address lingering questions and advance our comprehension of positive emotional contagion.

12.
Comput Biol Med ; 174: 108445, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38603901

ABSTRACT

Transfer learning (TL) has demonstrated its efficacy in addressing the cross-subject domain adaptation challenges in affective brain-computer interfaces (aBCI). However, previous TL methods usually use a stationary distance, such as Euclidean distance, to quantify the distribution dissimilarity between two domains, overlooking the inherent links among similar samples and potentially leading to suboptimal feature mapping. In this study, we introduced a novel algorithm called multi-source manifold metric transfer learning (MSMMTL) to enhance the efficacy of conventional TL. Specifically, we first selected source domains based on Mahalanobis distance to enhance their quality, and then used a manifold feature mapping approach to map the source and target domains on the Grassmann manifold to mitigate data drift between domains. In this newly established shared space, we optimized the Mahalanobis metric by maximizing the inter-class distances while minimizing the intra-class distances in the target domain. Recognizing that significant distribution discrepancies might persist across different domains even on the manifold, to ensure similar distributions between the source and target domains, we further imposed constraints on both domains under the Mahalanobis metric. This approach aims to reduce distributional disparities and enhance electroencephalogram (EEG) emotion recognition performance. In cross-subject experiments, the MSMMTL model exhibits average classification accuracies of 88.83% and 65.04% for SEED and DEAP, respectively, underscoring the superiority of our proposed MSMMTL over other state-of-the-art methods. MSMMTL can effectively solve the problem of individual differences in EEG-based affective computing.
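The first stage, ranking candidate source domains by Mahalanobis distance to the target, can be sketched as below. The abstract only names the distance, so the selection rule, the per-domain summary statistics, and all names here are assumptions for illustration:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of a point to a domain's feature distribution."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def select_sources(target_mean, source_domains, k=2):
    """Keep the k source domains whose feature distributions lie closest
    to the target mean under each source's own covariance."""
    scored = [(mahalanobis(target_mean, m, c), name)
              for name, (m, c) in source_domains.items()]
    return [name for _, name in sorted(scored)[:k]]

target = np.array([0.0, 0.0])
domains = {
    "s1": (np.array([0.1, 0.1]), np.eye(2)),
    "s2": (np.array([3.0, 3.0]), np.eye(2)),
    "s3": (np.array([0.5, -0.2]), np.eye(2)),
}
best = select_sources(target, domains, k=2)  # ["s1", "s3"]
```

Unlike Euclidean distance, the covariance term accounts for how features co-vary within a domain, which is the motivation the abstract gives for moving beyond a stationary distance.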

13.
Schizophr Res ; 267: 330-340, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38613864

ABSTRACT

Deficits in social cognition (SC) interfere with recovery in schizophrenia (SZ) and may be related to resting state brain connectivity. This study aimed at assessing the alterations in the relationship between resting state functional connectivity and the social-cognitive abilities of patients with SZ compared to healthy subjects. We divided the brain into 246 regions of interest (ROIs) following the Human Brainnetome Atlas. For each participant, we calculated the resting-state functional connectivity (rsFC) in terms of degree centrality (DC), which evaluates the total strength of the most powerful coactivations of every ROI with all other ROIs during rest. The rs-DC of the ROIs was correlated with five measures of SC assessing emotion processing and mentalizing in 45 healthy volunteers (HVs) chosen as a normative sample. Then, controlling for symptom severity, we verified whether these significant associations were altered, i.e., absent or of opposite sign, in 55 patients with SZ. We found five significant differences between SZ patients and HVs: in the patients' group, the correlations between emotion recognition tasks and rsFC of the right entorhinal cortex (R-EC), left superior parietal lobule (L-SPL), right caudal hippocampus (R-c-Hipp), and the right caudal (R-c) and left rostral (L-r) middle temporal gyri (MTG) were lost. An altered resting state functional connectivity of the L-SPL, R-EC, R-c-Hipp, and bilateral MTG in patients with SZ may be associated with impaired emotion recognition. If confirmed, these results may enhance the development of non-invasive brain stimulation interventions targeting those cerebral regions to reduce SC deficits in SZ.

14.
PeerJ Comput Sci ; 10: e1977, 2024.
Article in English | MEDLINE | ID: mdl-38660191

ABSTRACT

Emotional recognition is a pivotal research domain in computer and cognitive science. Recent advancements have led to various emotion recognition methods, leveraging data from diverse sources like speech, facial expressions, electroencephalogram (EEG), electrocardiogram, and eye tracking (ET). This article introduces a novel emotion recognition framework, primarily targeting the analysis of users' psychological reactions and stimuli. It is important to note that the stimuli eliciting emotional responses are as critical as the responses themselves. Hence, our approach synergizes stimulus data with physical and physiological signals, pioneering a multimodal method for emotional cognition. Our proposed framework unites stimulus source data with physiological signals, aiming to enhance the accuracy and robustness of emotion recognition through data integration. We initiated an emotional cognition experiment to gather EEG and ET data alongside recording emotional responses. Building on this, we developed the Emotion-Multimodal Fusion Neural Network (E-MFNN), optimized for multimodal data fusion to process both stimulus and physiological data. We conducted extensive comparisons between our framework's outcomes and those from existing models, also assessing various algorithmic approaches within our framework. This comparison underscores our framework's efficacy in multimodal emotion recognition. The source code is publicly available at https://figshare.com/s/8833d837871c78542b29.

15.
PeerJ Comput Sci ; 10: e1912, 2024.
Article in English | MEDLINE | ID: mdl-38660202

ABSTRACT

Multimodal emotion recognition techniques are increasingly essential for assessing mental states. Image-based methods, however, tend to focus predominantly on overt visual cues and often overlook subtler mental state changes. Psychophysiological research has demonstrated that heart rate (HR) and skin temperature are effective in detecting autonomic nervous system (ANS) activities, thereby revealing these subtle changes. However, traditional HR tools are generally more costly and less portable, while skin temperature analysis usually necessitates extensive manual processing. Advances in remote photoplethysmography (r-PPG) and automatic thermal region of interest (ROI) detection algorithms have been developed to address these issues, yet their accuracy in practical applications remains limited. This study aims to bridge this gap by integrating r-PPG with thermal imaging to enhance prediction performance. Ninety participants completed a 20-min questionnaire to induce cognitive stress, followed by watching a film aimed at eliciting moral elevation. The results demonstrate that the combination of r-PPG and thermal imaging effectively detects emotional shifts. Using r-PPG alone, the prediction accuracy was 77% for cognitive stress and 61% for moral elevation, as determined by a support vector machine (SVM). Thermal imaging alone achieved 79% accuracy for cognitive stress and 78% for moral elevation, utilizing a random forest (RF) algorithm. An early fusion strategy of these modalities significantly improved accuracies, achieving 87% for cognitive stress and 83% for moral elevation using RF. Further analysis, which utilized statistical metrics and explainable machine learning methods including SHapley Additive exPlanations (SHAP), highlighted key features and clarified the relationship between cardiac responses and facial temperature variations. 
Notably, it was observed that cardiovascular features derived from r-PPG models had a more pronounced influence in data fusion, despite thermal imaging's higher predictive accuracy in unimodal analysis.
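The early-fusion strategy described above amounts to concatenating per-sample feature vectors from the two modalities before a single classifier. The study used SVM and RF classifiers; the sketch below substitutes a tiny nearest-centroid classifier so it stays self-contained, and the feature dimensions and data are made up for illustration:

```python
import numpy as np

def early_fuse(rppg_feats, thermal_feats):
    """Early fusion: concatenate per-sample feature vectors from the
    two modalities before feeding a single classifier."""
    return np.concatenate([rppg_feats, thermal_feats], axis=1)

def nearest_centroid_fit(X, y):
    """Class centroids of the fused training features."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Assign each sample to the class with the closest centroid."""
    classes = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array([classes[i] for i in d.argmin(axis=0)])

rng = np.random.default_rng(1)
# Hypothetical features: 4 r-PPG stats and 3 thermal-ROI stats per sample,
# 20 "cognitive stress" samples (class 0) and 20 "moral elevation" (class 1).
rppg = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(3, 1, (20, 4))])
thermal = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)

X = early_fuse(rppg, thermal)  # shape (40, 7)
centroids = nearest_centroid_fit(X, y)
pred = nearest_centroid_predict(centroids, X)
```

The point of early fusion is that a single model can exploit correlations between cardiac and facial-temperature features, which the abstract reports improved accuracy over either modality alone.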

17.
BMC Psychiatry ; 24(1): 307, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654234

ABSTRACT

BACKGROUND: Obstructive sleep apnea-hypopnea syndrome (OSAHS) is a chronic breathing disorder characterized by recurrent upper airway obstruction during sleep. Although previous studies have shown a link between OSAHS and depressive mood, the neurobiological mechanisms underlying mood disorders in OSAHS patients remain poorly understood. This study aims to investigate the emotion processing mechanism in OSAHS patients with depressive mood using event-related potentials (ERPs). METHODS: Seventy-four OSAHS patients were divided into the depressive mood and non-depressive mood groups according to their Self-rating Depression Scale (SDS) scores. Patients underwent overnight polysomnography and completed various cognitive and emotional questionnaires. The patients were shown facial images displaying positive, neutral, and negative emotions and tasked to identify the emotion category, while their visual evoked potentials were simultaneously recorded. RESULTS: The two groups did not differ significantly in age, BMI, and years of education, but showed significant differences in their slow wave sleep ratio (P = 0.039), ESS (P = 0.006), MMSE (P < 0.001), and MoCA scores (P = 0.043). No significant difference was found in accuracy and response time on emotional face recognition between the two groups. N170 latency in the depressive group was significantly longer than in the non-depressive group (P = 0.014 and 0.007) at the bilateral parieto-occipital lobe, while no significant difference in N170 amplitude was found. No significant difference in P300 amplitude or latency was found between the two groups. Furthermore, N170 amplitude at PO7 was positively correlated with the arousal index and negatively correlated with MoCA scores (both P < 0.01). CONCLUSION: OSAHS patients with depressive mood exhibit increased N170 latency and impaired facial emotion recognition ability. Special attention towards depressive mood among OSAHS patients is warranted for its implications for patient care.


Subjects
Depression , Emotions , Sleep Apnea, Obstructive , Humans , Male , Middle Aged , Sleep Apnea, Obstructive/physiopathology , Sleep Apnea, Obstructive/psychology , Sleep Apnea, Obstructive/complications , Depression/physiopathology , Depression/psychology , Depression/complications , Female , Adult , Emotions/physiology , Polysomnography , Evoked Potentials/physiology , Electroencephalography , Facial Recognition/physiology , Evoked Potentials, Visual/physiology , Facial Expression
18.
J Neural Eng ; 21(2)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38588700

ABSTRACT

Objective. The instability of the EEG acquisition devices may lead to information loss in the channels or frequency bands of the collected EEG. This phenomenon may be ignored in available models, which leads to the overfitting and low generalization of the model. Approach. Multiple self-supervised learning tasks are introduced in the proposed model to enhance the generalization of EEG emotion recognition and reduce the overfitting problem to some extent. Firstly, channel masking and frequency masking are introduced to simulate the information loss in certain channels and frequency bands resulting from the instability of EEG, and two self-supervised learning-based feature reconstruction tasks combining masked graph autoencoders (GAE) are constructed to enhance the generalization of the shared encoder. Secondly, to take full advantage of the complementary information contained in these two self-supervised learning tasks to ensure the reliability of feature reconstruction, a weight sharing (WS) mechanism is introduced between the two graph decoders. Thirdly, an adaptive weight multi-task loss (AWML) strategy based on homoscedastic uncertainty is adopted to combine the supervised learning loss and the two self-supervised learning losses to enhance the performance further. Main results. Experimental results on SEED, SEED-V, and DEAP datasets demonstrate that: (i) Generally, the proposed model achieves higher averaged emotion classification accuracy than various baselines included in both subject-dependent and subject-independent scenarios. (ii) Each key module contributes to the performance enhancement of the proposed model. (iii) It achieves higher training efficiency, and significantly lower model size and computational complexity than the state-of-the-art (SOTA) multi-task-based model. (iv) The performances of the proposed model are less influenced by the key parameters. Significance.
The introduction of the self-supervised learning tasks helps to enhance the generalization of the EEG emotion recognition model and reduce overfitting to some extent, and the approach can be adapted to other EEG-based classification tasks.
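Adaptive multi-task losses based on homoscedastic uncertainty are commonly implemented in the style of Kendall et al.: each task loss is scaled by a learnable exp(-s_i) and penalised by s_i, where s_i is the log task variance. Whether this paper uses exactly that form is not stated in the abstract; the following is a minimal sketch under that assumption:

```python
import numpy as np

def adaptive_multitask_loss(task_losses, log_vars):
    """Homoscedastic-uncertainty weighting: each task loss L_i is scaled
    by exp(-s_i) and penalised by s_i, with s_i = log(sigma_i^2) learned
    jointly with the model so noisy tasks are down-weighted."""
    total = 0.0
    for L, s in zip(task_losses, log_vars):
        total += np.exp(-s) * L + s
    return total

# One supervised loss plus two self-supervised reconstruction losses
# (channel-masked and frequency-masked), values made up for illustration.
losses = [1.2, 0.8, 0.5]
log_vars = [0.0, 0.0, 0.0]  # equal weighting at initialisation
loss0 = adaptive_multitask_loss(losses, log_vars)  # 1.2 + 0.8 + 0.5 = 2.5
```

As training drives a task's s_i up, that task's gradient contribution shrinks, while the additive s_i term keeps the weights from collapsing to zero.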


Subjects
Electroencephalography , Emotions , Supervised Machine Learning , Supervised Machine Learning/standards , Datasets as Topic , Humans
19.
Schizophr Bull Open ; 5(1): sgae007, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38617732

ABSTRACT

Background and Hypothesis: People with serious mental illness (SMI; psychotic and affective disorders with psychosis) are at an increased risk of suicide, yet there is limited research on the correlates of suicide in SMI. Social cognitive impairments are common among people with SMI and several studies have examined social cognition and suicidal ideation (SI) and behavior. This systematic review aims to evaluate the links between various domains of social cognition, SI, and suicidal behavior in SMI. Study Design: Electronic databases (PubMed and PsycInfo) were searched through June 2023. Records obtained through this search (N = 618) were screened by 2 independent reviewers according to inclusion criteria. Relevant data were extracted, and study quality was assessed. Study Results: Studies (N = 16) from 12 independent samples were included in the systematic review (N = 2631, sample sizes ranged from N = 20 to N = 593). Assessments of social cognition and SI and behavior varied widely between studies. Broadly, effects were mixed. Better emotion recognition of negative affect was linked to SI and a history of suicide attempts, though there is little consistent evidence for the relationship of emotion recognition and SI or behavior. On the other hand, better theory of mind ability was linked to SI and a history of suicide attempts. Furthermore, negative attributional bias was linked to current SI, but not a history of SI or attempt. Conclusions: This review suggests mixed associations between social cognition, SI, and behavior in SMI. Future research should evaluate additional mediators and moderators of social cognition and suicide, employing prospective designs.

20.
Mult Scler Relat Disord ; 86: 105603, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38583368

ABSTRACT

BACKGROUND: Multiple sclerosis (MS) negatively impacts cognition and has been associated with deficits in social cognition, including emotion recognition. There is a lack of research examining emotion recognition from multiple modalities in MS. The present study aimed to employ a clinically available measure to assess multimodal emotion recognition abilities among individuals with MS. METHOD: Thirty-one people with MS and 21 control participants completed the Advanced Clinical Solutions Social Perceptions Subtest (ACS-SP), BICAMS, and measures of premorbid functioning, mood, and fatigue. ANCOVAs examined group differences in all outcomes while controlling for education. Correlational analyses examined potential correlates of emotion recognition in both groups. RESULTS: The MS group performed significantly worse on the ACS-SP than the control group, F(1, 49) = 5.32, p = .025. Significant relationships between emotion recognition and cognitive functions were found only in the MS group, namely for information processing speed (r = 0.59, p < .001), verbal learning (r = 0.52, p = .003) and memory (r = 0.65, p < 0.001), and visuospatial learning (r = 0.62, p < 0.001) and memory (r = 0.52, p = .003). Emotion recognition did not correlate with premorbid functioning, mood, or fatigue in either group. CONCLUSIONS: This study was the first to employ the ACS-SP to assess emotion recognition in MS. The results suggest that emotion recognition is impacted in MS and is related to other cognitive processes, such as information processing speed. The results provide information for clinicians amidst calls to include social cognition measures in standard MS assessments.
